Syntiant NDP200 Neural Decision Processor Gets Trained to Play DOOM Using Very Little Power - TechEBlog
A touchscreen McDonald's kiosk running DOOM is one thing; training a Syntiant NDP200 neural decision processor to play the game using very little power is another. The chip is designed to run neural networks for vision processing at under 1 mW, all while performing highly accurate processing on-device, including multi-sensor fusion, voice command recognition, and tamper detection. To train the processor to play DOOM, a lightweight version of the game called VizDoom was used. Reinforcement learning was then employed to train a neural network consisting of several layers, with the first set responsible for understanding what the network is seeing and the last for taking action in response. A total of 600,000 parameters were required, which is significant because the NDP200 has just 640 kilobytes of onboard memory for neural-network parameters.
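A quick back-of-envelope check shows why 600,000 parameters squeezing into 640 KB is plausible. This sketch assumes 8-bit quantized weights, a common choice for low-power edge inference; the article does not state what precision the NDP200 actually uses.

```python
# Assumption: 8-bit (1-byte) quantized weights, typical for edge inference.
params = 600_000          # parameters reported for the DOOM network
bytes_per_param = 1       # 1 byte per parameter at 8-bit precision
memory_kb = params * bytes_per_param / 1024

print(f"{memory_kb:.0f} KB of 640 KB used")  # ~586 KB, so it fits
```

At 32-bit float precision the same network would need roughly 2.3 MB, far over budget, which is why quantization matters on a chip like this.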
Teaching computers to play Doom is a blind alley for AI – here's an alternative
Games have long been used as testbeds and benchmarks for artificial intelligence, and there has been no shortage of achievements in recent months. Google DeepMind's AlphaGo and poker bot Libratus from Carnegie Mellon University have both beaten human experts at games that have traditionally been hard for AI – some 20 years after IBM's Deep Blue achieved the same feat in chess. Games like these have the attraction of clearly defined rules; they are relatively simple and cheap for AI researchers to work with, and they provide a variety of cognitive challenges at any desired level of difficulty. By inventing algorithms that play them well, researchers hope to gain insights into the mechanisms needed to function autonomously. With the arrival of the latest techniques in AI and machine learning, attention is now shifting to visually detailed computer games – including the 3D shooter Doom, various 2D Atari games such as Pong and Space Invaders, and the real-time strategy game StarCraft.
- Leisure & Entertainment > Games > Computer Games (1.00)
- Leisure & Entertainment > Games > Chess (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.55)
- Information Technology > Artificial Intelligence > Games > Poker (0.55)
- Information Technology > Artificial Intelligence > Games > Go (0.35)
An introduction to Deep Q-Learning: let's play Doom
At each time step, we receive a tuple (state, action, reward, new_state). We learn from it (we feed the tuple to our neural network), and then discard that experience. The problem is that we feed sequential samples from our interactions with the environment to the neural network, and it tends to forget previous experiences as they are overwritten by new ones. For instance, if we train on the first level and then on the second (which is totally different), our agent can forget how to behave in the first level.
Can you teach a computer to play Doom like a human?
Google made headlines earlier this year when its AlphaGo AI defeated world champion Lee Se-Dol in the ancient board game Go. But a group of researchers want to push the boundaries and see how a computer might fare in a first-person shooter deathmatch. The 2016 Computational Intelligence and Games (CIG) Conference will host a competition to determine the best bot capable of winning a multiplayer round of Doom while playing the way a human does. That means that unlike enemy AI in video games, which has a complete overview of the level's map and the locations of powerups and weapons, the bots will have to rely only on raw visual input that mimics what a human gamer sees when they play the game. The Visual AI Doom Competition will pit bots against each other in two tracks: in the first, they will play a map known to their programmers with only rocket launchers and health boosts; the second will feature an undisclosed map with all weapons and items available.
- Europe > Netherlands > North Holland > Amsterdam (0.07)
- Europe > Greece (0.07)